Find your Way by Observing the Sun and Other Semantic Cues
In this paper we present a robust, efficient and affordable approach to
self-localization that requires neither GPS nor knowledge about the
appearance of the world. Towards this goal, we utilize freely available
cartographic maps and derive a probabilistic model that exploits semantic cues
in the form of sun direction, presence of an intersection, road type, speed
limit as well as the ego-car trajectory in order to produce very reliable
localization results. Our experimental evaluation shows that our approach can
localize much faster (in terms of driving time) with less computation and more
robustly than competing approaches that ignore semantic information.
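The probabilistic fusion of semantic cues can be illustrated with a minimal particle-style sketch: candidate map poses are reweighted by how well the observed cues (sun direction, intersection presence, road type) agree with what the map predicts. The function names, the Gaussian angular likelihood, and the simple product-of-likelihoods model below are assumptions for illustration, not the paper's actual formulation.

```python
import math

def cue_likelihood(observed, predicted, sigma):
    """Gaussian likelihood of an angular cue such as sun direction (radians)."""
    diff = (observed - predicted + math.pi) % (2 * math.pi) - math.pi
    return math.exp(-0.5 * (diff / sigma) ** 2)

def update_weights(particles, obs):
    """Reweight candidate poses by independent semantic-cue likelihoods."""
    weights = []
    for p in particles:
        w = cue_likelihood(obs["sun_dir"], p["sun_dir"], sigma=0.2)
        # Discrete cues: full weight on a match, a small floor on a mismatch.
        w *= 1.0 if obs["at_intersection"] == p["at_intersection"] else 0.1
        w *= 1.0 if obs["road_type"] == p["road_type"] else 0.2
        weights.append(w)
    total = sum(weights) or 1.0
    return [w / total for w in weights]

particles = [
    {"sun_dir": 1.0, "at_intersection": True, "road_type": "highway"},
    {"sun_dir": 2.5, "at_intersection": False, "road_type": "residential"},
]
obs = {"sun_dir": 1.1, "at_intersection": True, "road_type": "highway"}
print(update_weights(particles, obs))
```

With consistent cues, probability mass concentrates quickly on the matching pose, which is the intuition behind the fast localization the abstract reports.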
What Happened 3 Seconds Ago? Inferring the Past with Thermal Imaging
Inferring past human motion from RGB images is challenging due to the
inherent uncertainty of the prediction problem. Thermal images, on the other
hand, encode traces of past human-object interactions left in the environment
via thermal radiation measurement. Based on this observation, we collect the
first RGB-Thermal dataset for human motion analysis, dubbed Thermal-IM. Then we
develop a three-stage neural network model for accurate past human pose
estimation. Comprehensive experiments show that thermal cues significantly
reduce the ambiguities of this task, and the proposed model achieves remarkable
performance. The dataset is available at
https://github.com/ZitianTang/Thermal-IM
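The core observation can be sketched in a few lines: surfaces recently touched by a person stay warmer than their surroundings, so a simple threshold over a thermal frame recovers a coarse mask of past human-object contact. The threshold rule, array layout, and toy temperatures below are illustrative assumptions, not the Thermal-IM pipeline.

```python
import numpy as np

def recent_contact_mask(thermal, margin=2.0):
    """Mark pixels noticeably warmer than the scene's median temperature."""
    baseline = np.median(thermal)
    return thermal > baseline + margin

# A toy 4x4 thermal frame (degrees C) with one warm handprint-like patch.
frame = np.full((4, 4), 20.0)
frame[1:3, 1:3] = 26.0  # residual heat left by past contact
mask = recent_contact_mask(frame)
print(int(mask.sum()))  # 4 warm pixels
```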
UltraLiDAR: Learning Compact Representations for LiDAR Completion and Generation
LiDAR provides accurate geometric measurements of the 3D world.
Unfortunately, dense LiDARs are very expensive and the point clouds captured by
low-beam LiDAR are often sparse. To address these issues, we present
UltraLiDAR, a data-driven framework for scene-level LiDAR completion, LiDAR
generation, and LiDAR manipulation. The crux of UltraLiDAR is a compact,
discrete representation that encodes the point cloud's geometric structure, is
robust to noise, and is easy to manipulate. We show that by aligning the
representation of a sparse point cloud to that of a dense point cloud, we can
densify the sparse point clouds as if they were captured by a real high-density
LiDAR, drastically reducing the cost. Furthermore, by learning a prior over the
discrete codebook, we can generate diverse, realistic LiDAR point clouds for
self-driving. We evaluate the effectiveness of UltraLiDAR on sparse-to-dense
LiDAR completion and LiDAR generation. Experiments show that densifying
real-world point clouds with our approach can significantly improve the
performance of downstream perception systems. Compared to prior art on LiDAR
generation, our approach generates much more realistic point clouds. According
to an A/B test, over 98.5% of the time human participants prefer our results
over those of previous methods.
Comment: CVPR 2023. Project page: https://waabi.ai/ultralidar
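The "compact, discrete representation" can be illustrated with a minimal vector-quantization sketch: each feature vector is replaced by the index of its nearest codebook entry, yielding discrete tokens that are easy to manipulate and to place a prior over. The codebook size, feature dimension, and nearest-neighbor assignment below are illustrative assumptions; UltraLiDAR's actual encoder, decoder, and training are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 hypothetical codes, 4-dim features

def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codebook entry."""
    # (N, 1, D) - (1, K, D) broadcasts to (N, K) squared distances.
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

features = rng.normal(size=(5, 4))   # stand-in for encoded point-cloud features
codes = quantize(features, codebook)
recon = codebook[codes]              # discrete tokens mapped back to vectors
print(codes.shape, recon.shape)
```

Aligning sparse- and dense-cloud features in such a shared discrete space is what lets a sparse scan be decoded as if it had been captured by a dense LiDAR.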